The Army of One (Sample): the Characteristics of Sampling-based Probabilistic Neural Representations
There is growing evidence that humans and animals represent the uncertainty associated with sensory stimuli and utilize this uncertainty during planning and decision making in a statistically optimal way. Recently, a nonparametric framework for representing probabilistic information has been proposed whereby neural activity encodes samples from the distribution over external variables. Although such sample-based probabilistic representations have strong empirical and theoretical support, two major issues need to be clarified before they can be considered as viable candidate theories of cortical computation. First, in a fluctuating natural environment, can neural dynamics provide sufficient samples to accurately estimate a stimulus? Second, can such a code support accurate learning over biologically plausible time-scales? Although it is well known that sampling is statistically optimal if the number of samples is unlimited, biological constraints mean that estimation and learning in the cortex must be supported by a relatively small number of possibly dependent samples. We explored these issues in a cue combination task by comparing a neural circuit that employed a sampling-based representation to an optimal estimator. For static stimuli, we found that a single sample is sufficient to obtain an estimator with less than twice the optimal variance, and that performance improves with the inverse square root of the number of samples. For dynamic stimuli, with linear-Gaussian evolution, we found that the efficiency of the estimation improves significantly as temporal information stabilizes the estimate, and because sampling does not require a burn-in phase. Finally, we found that using a single sample, the dynamic model can accurately learn the parameters of the input neural populations up to a general scaling factor, which disappears for modest sample size. 
These results suggest that sample-based representations can support estimation and learning using a relatively small number of samples and are therefore highly feasible alternatives for performing probabilistic cortical computations.
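The scaling claims above can be checked with a minimal Monte Carlo sketch (not from the paper; the cue variances, prior, and trial counts below are arbitrary illustrative choices). For a Gaussian posterior in a two-cue combination task, an estimator built by averaging n posterior samples has mean squared error of roughly (1 + 1/n) times that of the optimal posterior-mean estimator, so a single sample gives about twice the optimal variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cue-combination setup: two noisy cues c1, c2 about a stimulus s.
sigma1, sigma2 = 1.0, 2.0
n_trials = 200_000

s = rng.normal(0.0, 5.0, n_trials)            # stimulus values across trials
c1 = s + rng.normal(0.0, sigma1, n_trials)    # cue 1 with noise sd sigma1
c2 = s + rng.normal(0.0, sigma2, n_trials)    # cue 2 with noise sd sigma2

# Optimal (posterior-mean) estimator, treating the prior as flat:
w1 = (1 / sigma1**2) / (1 / sigma1**2 + 1 / sigma2**2)
mu_post = w1 * c1 + (1 - w1) * c2
var_post = 1 / (1 / sigma1**2 + 1 / sigma2**2)

def sample_estimator(n_samples):
    # Sampling-based estimate: average of n independent posterior samples.
    samples = rng.normal(mu_post, np.sqrt(var_post), (n_samples, n_trials))
    return samples.mean(axis=0)

mse_opt = np.mean((mu_post - s) ** 2)               # ~ var_post
mse_1 = np.mean((sample_estimator(1) - s) ** 2)     # ~ 2 * var_post
mse_10 = np.mean((sample_estimator(10) - s) ** 2)   # ~ 1.1 * var_post
```

Here the samples are drawn independently; correlated samples from realistic neural dynamics, as the abstract notes, would degrade this somewhat.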

Learning complex tasks with probabilistic population codes
Recent psychophysical experiments imply that the brain employs a neural representation of the uncertainty in sensory stimuli and that probabilistic computations are supported by the cortex. Several candidate neural codes for uncertainty have been posited including Probabilistic Population Codes (PPCs). PPCs support various versions of probabilistic inference and marginalisation in a neurally plausible manner. However, in order to establish whether PPCs can be of general use, three important limitations must be addressed. First, it is critical that PPCs support learning. For example, during cue combination, subjects are able to learn the uncertainties associated with the sensory cues as well as the prior distribution over the stimulus. However, previous modelling work with PPCs requires these parameters to be carefully set by hand. Second, PPCs must be able to support inference in non-linear models. Previous work has focused on linear models and it is not clear whether non-linear models can be implemented in a neurally plausible manner. Third, PPCs must be shown to scale to high-dimensional problems with many variables. This contribution addresses these three limitations of PPCs by establishing a connection with variational Expectation Maximisation (vEM). In particular, we show that the usual PPC update for cue combination can be interpreted as the E-Step of a vEM algorithm. The corresponding M-Step then automatically provides a method for learning the parameters of the model by adapting the connection strengths in the PPC network in an unsupervised manner. Using a version of sparse coding as an example, we show that the vEM interpretation of PPC can be extended to non-linear and multi-dimensional models and we show how the approach scales with the dimensionality of the problem. Our results provide a rigorous assessment of the ability of PPCs to capture the probabilistic computations performed in the cortex.
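The E-step/M-step correspondence described above can be illustrated with a small sketch (not the paper's network model; the linear-Gaussian generative model, variances, and trial counts are illustrative assumptions). The E-step computes the trial-by-trial Gaussian posterior over the stimulus, analogous to the PPC cue-combination update, and the M-step re-estimates the cue noise variances, analogous to adapting connection strengths without supervision:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical generative model: hidden stimulus s with known prior variance,
# two cues whose noise variances are unknown and must be learned by EM.
tau2 = 4.0                        # known prior variance of s
true_s1, true_s2 = 0.5, 2.0      # true cue noise variances (to be recovered)
n = 50_000
s = rng.normal(0.0, np.sqrt(tau2), n)
c1 = s + rng.normal(0.0, np.sqrt(true_s1), n)
c2 = s + rng.normal(0.0, np.sqrt(true_s2), n)

s1, s2 = 1.0, 1.0                 # initial guesses for the noise variances
for _ in range(100):
    # E-step: Gaussian posterior over s on each trial (the cue-combination update).
    prec = 1 / tau2 + 1 / s1 + 1 / s2
    v = 1 / prec                                  # posterior variance
    m = v * (c1 / s1 + c2 / s2)                   # posterior mean per trial
    # M-step: re-estimate each cue's noise variance from expected statistics,
    # sigma_k^2 <- mean of E[(c_k - s)^2] = (c_k - m)^2 + v.
    s1 = np.mean((c1 - m) ** 2) + v
    s2 = np.mean((c2 - m) ** 2) + v
```

After convergence, s1 and s2 approach the true cue variances, which is the unsupervised parameter learning the vEM interpretation provides.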

Recommended from our members
Benefits of active learning for teachers
Good teachers and good active learners share the ability to generate samples (examples or queries, respectively) that are informative in light of current knowledge. In line with this, the current experiment found that active learners outperformed yoked passive learners in a subsequent category teaching task. The learning task was replicated from Markant and Gureckis (2014) and included a manipulation of category structure (Rule-based or Information-Integration). Participants (N = 40 dyads) first learned how to categorize stimuli defined along two continuous perceptual features, and their subjective classification boundaries were inferred from categorization tests. During teaching, participants generated a small, fixed number of examples to teach the categorization boundary to an imagined learner. Improvements in teaching due to active learning went beyond what could be explained by better categorization performance prior to teaching, and example selection was modulated by participants' degree of uncertainty about the to-be-taught boundary.
Do humans recalibrate the confidence of advisers?
In collaborative tasks, humans can make better joint decisions by aggregating individual information in proportion to their communicated confidence (Bahrami et al., 2010). However, if people blindly rely on their partner's confidence expressions, they could easily reach suboptimal solutions when their collaborator's confidence judgments are not calibrated to their performance but, for instance, exhibit an overconfidence bias. Given that calibrated advisers are rated as more credible (Sah et al., 2013), we propose that prior experience with a collaborator leads to a recalibration of their confidence judgments before their advice is incorporated. In an online experiment, participants first viewed two fictitious participants, one calibrated and one biased, perform a categorization task. Following this, participants completed a similar task by taking advice from just one of the two previously observed advisers on a given trial. We tested whether, on each trial, participants chose the adviser with the higher expressed confidence or the higher recalibrated confidence.
Do humans recalibrate the confidence of advisers or take their confidence at face value?
Who we choose to learn from is influenced by the relative confidence of potential informants. More confident advisers are preferred based on an assumption that confidence is a good indicator of accuracy. However, accuracy and confidence are often not calibrated, either due to strategic manipulations of confidence or unintentional failures of metacognition. When accuracy information is readily available, people are additionally vigilant to the calibration of informants, penalizing incorrect yet confident advisers (Tenney et al., 2007). The current experiment tested whether participants can leverage inferences about two advisers' calibration profiles to make optimal trial-by-trial decisions. We predicted that the choice of advisers would reflect relative differences in the advisers' probability of being correct given their stated confidence (recalibrated confidence), as opposed to differences in stated confidence. The prediction was not supported by the data, but calibration had a modulating effect on choices: more confident advisers were more influential only when they were also calibrated.
Pre-Training Leads to a Structural Novelty Effect in Spatial Visual Statistical Learning
We investigated the influence of structural properties of previously learned stimuli on Spatial Visual Statistical Learning.
Participants (n=170) were first exposed to a stream of scenes containing only one type of regularity (horizontal or vertical pairs), followed by a stream containing both types of regularities. We found that participants performed above chance for the pairs of the first stream (M=54.7%, SE=1.2, p<0.001, BF=91.89) as well as for the novel type of pair of the second stream (M=55.6%, SE=1.9, p=0.005, BF=4.04), but not for the familiar type of pair of the second stream (M=51.5%, SE=2.0, p=0.465, BF=0.11).
This novelty effect indicates interference between the similarly structured pairs in the first and second streams of scenes, suggesting representational overlap between pairs of the same orientation.